Using noise to probe recurrent neural network structure and prune synapses
Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems and is often considered an irritant to be overcome. Here we suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. We construct a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only the synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, we prove that this rule preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically plausible and may suggest a new role for noise in neural computation.
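The abstract does not state the rule's exact form, but the ingredients it names (the synaptic weight and the noise-driven covariance of the two neurons a synapse connects) can be sketched concretely. The Python snippet below is a minimal illustration under our own assumptions, not the authors' rule: it drives a symmetric linear recurrent network with white noise, estimates the steady-state activity covariance, and prunes the synapses with the smallest local weight-times-covariance score. The compensating strengthening step and the spectrum-preservation guarantee described in the abstract are not reproduced here.

```python
import numpy as np

# Illustrative sketch, NOT the paper's plasticity rule: drive a stable linear
# recurrent network with white noise, estimate the activity covariance, and
# score each synapse using only local quantities -- its weight W[i, j] and the
# noise-driven covariance C[i, j] of the two neurons it connects.

rng = np.random.default_rng(0)
n = 50

# Random symmetric connectivity, scaled so dx = (-x + W x) dt + noise is stable.
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W = 0.5 * (W + W.T)
W *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(W)))

# Euler-Maruyama simulation of the noise-driven dynamics; accumulate the
# covariance of activity after discarding an initial transient.
dt, T, burn_in = 0.02, 50_000, 5_000
x = np.zeros(n)
C = np.zeros((n, n))
for t in range(T):
    x += dt * (-x + W @ x) + np.sqrt(dt) * rng.normal(size=n)
    if t >= burn_in:
        C += np.outer(x, x)
C /= (T - burn_in)

# Local pruning score: a synapse sees only its own weight and the covariance of
# its pre- and post-synaptic neurons. Prune the half with the smallest score
# (the 50% threshold is purely illustrative).
score = np.abs(W * C)
threshold = np.quantile(score, 0.5)
W_pruned = np.where(score >= threshold, W, 0.0)

print("fraction pruned:", np.mean(W_pruned == 0.0))
print("largest eigenvalue shift:",
      np.max(np.abs(np.sort(np.linalg.eigvalsh(W)) -
                    np.sort(np.linalg.eigvalsh(W_pruned)))))
```

Because this sketch only deletes synapses and never strengthens the survivors, the eigenvalue shift it reports will generally be nonzero; the point is only to show what a purely local, noise-driven pruning statistic looks like in code.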
Review for NeurIPS paper: Using noise to probe recurrent neural network structure and prune synapses
Additional Feedback: I provided these comments in the Discussion to argue for acceptance: I found that the authors' responses addressed the issues of "symmetric connections" and "biological plausibility" reasonably well. Both reviewers who gave a "5" agreed that the theoretical derivation is correct; they mostly questioned the biological plausibility or applicability. While symmetric connections are not necessarily biologically plausible, many important models and theoretical analyses, for example the works of Hopfield, Sompolinsky, and others, have made such simplifying assumptions and in the end produced work that has been influential in theoretical neuroscience. I liked the paper because the idea is interesting, novel, and innovative: the proposal that learning and pruning can be carried out locally, using noise as a probe, has not been put forward or explored before.
Review for NeurIPS paper: Using noise to probe recurrent neural network structure and prune synapses
All the reviewers agree that the theoretical derivation of the model is correct. In the rebuttal, the authors satisfactorily addressed the main concern about the symmetric connections. The contribution of this manuscript is considered innovative and of interest, even if its biological plausibility might turn out not to hold.